How Many Distribution Functions Are There? Bracketing Entropy Bounds for High Dimensional Distribution Functions
Authors
Abstract
This means that every F ∈ F_d satisfies: (i) (non-negativity). For finite intervals I = (a_1, b_1] × · · · × (a_d, b_d] ≡ (a, b], with a, b ∈ R^d, F(I) = ∆_d F(a, b] ≥ 0, where ∆_d denotes the d-dimensional difference operator. (ii) (continuity from above). If y ↓ x, then F(y) ↓ F(x). (iii) (normalization). If x_1 ∧ · · · ∧ x_d → −∞, then F(x) → 0; if x_1 ∧ · · · ∧ x_d → +∞, then F(x) → 1. (See Billingsley [4], page 265; or see, e.g., Breiman [9], page 27, for analogous properties if distribution functions are defined, alternatively, by F(x) = P(X < x).) It follows from the above that F_d is a subclass of the collection BV_H(R^d), the class of all functions of bounded variation in the sense of Hardy-Krause (see Clarkson and Adams [13], page 825, and Hildebrandt [20], chapter 3, for the case d = 2). [Most of the current literature on functions of bounded variation on R^d seems to concern the classes BV(R^d) ≡ BV_T(R^d) of functions of bounded variation in the sense of Tonelli (see Clarkson and Adams [13] and Ziemer [30], chapter 5, definition 5.5.1, page 220, and his historical note 5.1, page 280).]
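As a concrete illustration of property (i), ∆_d F(a, b] is the inclusion-exclusion sum of F over the 2^d corners of the rectangle (a, b], with the sign determined by how many lower endpoints are chosen. The short Python sketch below is illustrative only (the helper name rectangle_mass and the uniform-CDF example are assumptions, not from the paper); it evaluates the d-dimensional difference operator for a user-supplied CDF, so non-negativity can be checked numerically.

```python
# Illustrative sketch (not from the paper): the d-dimensional difference
# operator Delta_d F(a, b] computed by inclusion-exclusion over the 2^d
# corners of the rectangle (a, b].  For a genuine distribution function F,
# this quantity is the probability mass of the rectangle and must be >= 0.
from itertools import product


def rectangle_mass(F, a, b):
    """Return Delta_d F(a, b] for a CDF F on R^d.

    F    -- callable taking a sequence of d coordinates
    a, b -- sequences with a[i] <= b[i] for every i
    """
    d = len(a)
    total = 0.0
    # Each corner picks a[i] or b[i] in coordinate i; the sign is
    # (-1)^(number of coordinates where the lower endpoint a[i] was chosen).
    for choice in product((0, 1), repeat=d):
        corner = [b[i] if c else a[i] for i, c in enumerate(choice)]
        sign = (-1) ** (d - sum(choice))
        total += sign * F(corner)
    return total


if __name__ == "__main__":
    # Independent uniform CDF on [0, 1]^2: F(x, y) = clip(x) * clip(y).
    F = lambda x: min(max(x[0], 0.0), 1.0) * min(max(x[1], 0.0), 1.0)
    print(rectangle_mass(F, (0.2, 0.2), (0.5, 0.7)))  # 0.3 * 0.5 = 0.15
```

For d = 1 the sum reduces to F(b) − F(a), recovering the usual one-dimensional rectangle probability.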
Similar Articles
Entropy Estimate For High Dimensional Monotonic Functions
We establish upper and lower bounds for the metric entropy and bracketing entropy of the class of d-dimensional bounded monotonic functions under L^p norms. It is interesting to see that both the metric entropy and bracketing entropy have different behaviors for p < d/(d − 1) and p > d/(d − 1). We apply the new bounds for bracketing entropy to establish a global rate of convergence of the MLE of ...
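For reference, the quantity being bounded here and in the main paper is the bracketing entropy; the display below recalls the standard textbook definition in general notation (it is not a statement of either paper's specific bounds).

```latex
% Standard definition of the L^p bracketing number of a class \mathcal{F}
% (general textbook notation; not this paper's result).
% An \varepsilon-bracket [l, u] is the set of functions f with l \le f \le u pointwise.
N_{[\,]}\bigl(\varepsilon, \mathcal{F}, L^{p}\bigr)
  = \min\Bigl\{ m \,:\, \exists\, [l_1, u_1], \dots, [l_m, u_m]
      \text{ with } \|u_j - l_j\|_{L^{p}} \le \varepsilon
      \text{ and } \mathcal{F} \subseteq \textstyle\bigcup_{j=1}^{m} [l_j, u_j] \Bigr\},
\qquad
\log N_{[\,]}\bigl(\varepsilon, \mathcal{F}, L^{p}\bigr)
  \text{ is the bracketing entropy.}
```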
Bayesian Estimation of Shift Point in Shape Parameter of Inverse Gaussian Distribution Under Different Loss Functions
In this paper, a Bayesian approach is proposed for shift point detection in an inverse Gaussian distribution. In this study, the mean parameter of the inverse Gaussian distribution is assumed to be constant and a shift point in the shape parameter is considered. First, the posterior distribution of the shape parameter is obtained. Then the Bayes estimators are derived under a class of priors and using variou...
The behavior of the reliability functions and stochastic orders in family of the Kumaraswamy-G distributions
The Kumaraswamy distribution is a two-parameter distribution on the interval (0,1) that is very similar to the beta distribution. This distribution is applicable to many natural phenomena whose outcomes have lower and upper bounds, such as the proportion of people in a society who consume certain products in a given interval. In this paper, we introduce the family of Kumaraswamy-G distribution, an...
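As background for the Kumaraswamy-G construction, the base Kumaraswamy(a, b) distribution has the closed-form CDF F(x) = 1 − (1 − x^a)^b on (0, 1), which makes both evaluation and inverse-CDF sampling elementary. The Python sketch below only illustrates that closed form (the function names and the parameter values a = 2, b = 5 are assumptions, not from the paper):

```python
# Illustration of the base Kumaraswamy(a, b) distribution on (0, 1): the CDF
# and its inverse are available in closed form, unlike the beta CDF.
import random


def kumaraswamy_cdf(x, a, b):
    """CDF of Kumaraswamy(a, b) for 0 <= x <= 1: F(x) = 1 - (1 - x**a)**b."""
    return 1.0 - (1.0 - x ** a) ** b


def kumaraswamy_sample(a, b, rng=random):
    """Draw one variate by inverting the CDF: x = (1 - (1 - u)**(1/b))**(1/a)."""
    u = rng.random()
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)


if __name__ == "__main__":
    a, b = 2.0, 5.0
    xs = [kumaraswamy_sample(a, b) for _ in range(10_000)]
    # The empirical CDF at 0.3 should be close to the closed-form value.
    print(sum(x <= 0.3 for x in xs) / len(xs), kumaraswamy_cdf(0.3, a, b))
```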
Invariance Principles for Dependent Processes Indexed by Besov Classes with an Application to a Hausman Test for Linearity
This paper considers functional central limit theorems for stationary absolutely regular mixing processes. Bounds for the entropy with bracketing are derived using recent results in Nickl and Pötscher (2007). More specifically, their bracketing metric entropy bounds are extended to a norm defined in Doukhan, Massart and Rio (1995, henceforth DMR) that depends both on the marginal distribution o...
Estimation for the Type-II Extreme Value Distribution Based on Progressive Type-II Censoring
In this paper, we discuss statistical inference on the unknown parameters and reliability function of the type-II extreme value (EVII) distribution when the observed data are progressively type-II censored. By applying the EM algorithm, we obtain maximum likelihood estimates (MLEs). We also suggest approximate maximum likelihood estimators (AMLEs), which have explicit expressions. We provide Bayes ...